day 7: multilevel models

multilevel tadpoles

review (or introduction?)

(Ignore probability and estimation for now)

\[ Y_i = b_0 + b_1X_{1i} + ... + \epsilon_i \] This assumes each observation \((i)\) is independent of all other observations. But what if our data were clustered in some way? We might be able to express each observation \(i\) as belonging to a specific cluster, \(j\).

\[ Y_{ij} = b_0 + b_1X_{1ij} + ... + \epsilon_{ij} \]

But what is the problem with this?
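A quick simulation sketch (all values invented) makes the problem concrete: when observations share a cluster, the residuals from a single-level model are correlated within clusters, violating the independence assumption.

```r
# Toy simulation (invented values): clustered data fit with a
# single-level model leaves correlated residuals within clusters.
set.seed(1)
n_clusters <- 30
n_per      <- 10
cluster <- rep(1:n_clusters, each = n_per)
u <- rnorm(n_clusters, 0, 1)                      # cluster-level deviations
y <- 2 + u[cluster] + rnorm(n_clusters * n_per, 0, 1)

r <- resid(lm(y ~ 1))                             # model ignores clustering
# share of residual variance attributable to clusters (would be near 0
# if observations were truly independent; here it is substantial)
var(tapply(r, cluster, mean)) / var(r)
```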

multilevel models

We’re starting our unit on multilevel models, which can be thought of as models that “remember” features of clusters of data as they learn about all the clusters. The model will pool information across clusters (e.g., our estimates about cluster A will be informed in part by clusters B, C, and D). This tends to improve estimates about each cluster. Here are some other benefits of multilevel modeling:

  1. improved estimates for repeated sampling. When the same cluster is measured more than once (e.g., repeated measures), a single-level model will tend to over- or under-fit the data.
  2. improved estimates for imbalance in sampling. prevent over-sampled clusters from dominating inference, while also balancing the fact that larger clusters have more information.
  3. estimates of variation. model variation explicitly!
  4. avoid averaging, retain variation. averaging manufactures false confidence (artificially inflates precision) and introduces arbitrary data transformations.

Multilevel modeling should be your default approach.

(Still non-Bayesian)

Intercept-only model

\[\begin{align*} \text{Level 1}&\\ Y_{ij} &= \beta_{0j} + \epsilon_{ij}\\ \text{Level 2}&\\ \beta_{0j} &= \gamma_{00} + U_{0j} \\ U_{0j} &\sim \text{Normal}(0, \tau^2_{00}) \\ \epsilon_{ij} &\sim \text{Normal}(0, \sigma^2) \\ \end{align*}\]

We can rewrite this as the “combined” model:

\[ Y_{ij} = \gamma_{00} + U_{0j} + \epsilon_{ij} \] Level 1 is the level at which your data repeat within the grouping or clustering variable. Is your cluster classrooms? Then students are Level 1. Is your cluster people? Then observations are Level 1.

Level 2 takes the parameters at Level 1 and decomposes them into a fixed component \((\gamma)\) that reflects the average and, if desired, the individual deviations around that fixed effect \((U)\).


Each person’s score is therefore divided into the “fixed” and “random” components:

\[\begin{align*} Y_{i,16} &= \beta_{0, 16} + \epsilon_{i,16} \\ ... \\ \beta_{0, 16} &= \gamma_{00} + U_{0,16} \end{align*}\]

(In this model, the random noise is assumed to have constant variance across observations and across participants.)
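The combined model is easy to simulate from, which makes the decomposition concrete. A minimal sketch, with the \(\gamma_{00}\), \(\tau_{00}\), and \(\sigma\) values invented for illustration:

```r
# Generate data from Y_ij = gamma00 + U_0j + e_ij (parameter values invented)
set.seed(2)
J   <- 50        # number of clusters
n_j <- 20        # observations per cluster
gamma00 <- 0.5   # fixed effect: the grand mean
tau00   <- 0.8   # sd of the cluster deviations U_0j
sigma   <- 0.6   # sd of the observation-level noise

U0 <- rnorm(J, 0, tau00)
j  <- rep(1:J, each = n_j)
Y  <- gamma00 + U0[j] + rnorm(J * n_j, 0, sigma)

# the sample decomposition approximately recovers the generating values
c(grand_mean = mean(Y),
  sd_between = sd(tapply(Y, j, mean)),
  sd_within  = sqrt(mean(tapply(Y, j, var))))
```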

example: multilevel people

iFit study

  • 193 participants recruited from a commercial weight loss study as well as the general population in two California counties
  • Wore a Fitbit Charge for 24 hours/day for at least 100 days
  • Daily diary survey about several variables, including affect (PANAS) resulting in a positive and negative affect score for each day.
library(tidyverse)
library(brms)
library(tidybayes)  # spread_draws(), epred_draws(), get_variables()
library(ggdist)     # stat_gradientinterval()
library(patchwork)  # for combining plots with /
library(here)

data_path = "https://raw.githubusercontent.com/sjweston/uobayes/refs/heads/main/files/data/external_data/williams.csv"
d <- read.csv(data_path)
d %>% count(id) %>% select(n) %>% rethinking::precis()
     mean       sd 5.5% 94.5%  histogram
n 67.5285 28.01953   17    99 ▁▁▂▂▁▂▂▂▂▇
rethinking::precis(d)
                 mean         sd       5.5%      94.5%      histogram
id        98.61029694 63.7493291 10.0000000 207.000000   ▇▇▇▃▅▅▅▃▃▃▂▂
female     0.57016803  0.4950710  0.0000000   1.000000     ▅▁▁▁▁▁▁▁▁▇
PA.std     0.01438236  1.0241384 -1.6812971   1.751466       ▁▁▃▇▇▃▁▁
day       44.17962096 27.6985612  6.0000000  92.000000     ▇▇▇▇▅▅▅▃▃▃
PA_lag     0.01992587  1.0183833 -1.7204351   1.717036     ▁▂▃▅▇▇▅▃▂▁
NA_lag    -0.04575229  0.9833161 -0.8750718   1.990468 ▇▃▂▁▁▁▁▁▁▁▁▁▁▁
steps.pm   0.05424387  0.6298941 -1.0258068   1.011356       ▁▂▇▇▃▁▁▁
steps.pmd  0.02839160  0.7575395 -1.1235951   1.229974 ▁▁▇▇▁▁▁▁▁▁▁▁▁▁
NA.std    -0.07545093  0.9495660 -1.0484293   1.811061     ▁▁▁▇▂▁▁▁▁▁
set.seed(114)
sample_id = sample(unique(d$id), replace=F, size=6)
d %>%
  filter(id %in% sample_id) %>% 
  mutate(id = factor(id)) %>% 
  ggplot(aes(x = day, y = PA.std, group = id, fill = id)) + 
  geom_point(aes(color = id)) + 
  geom_line( aes(color = id), alpha=.5) +
  labs(y="Positive Affect") + 
  facet_wrap(~id) +
  theme(legend.position = "none")

What if we wanted to estimate each person’s positive affect? One method would be to ignore the clustering entirely and assign everyone the overall average. This is what we refer to as COMPLETE POOLING.

mean(d$PA.std)
[1] 0.01438236

Obviously this is not great. People differ, and we probably know we shouldn’t assume each participant has the same level of positive affect. (Although… isn’t this how we talk about people in an independent samples t test?) I think we can acknowledge that we lose a lot of information this way.
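A toy example (invented numbers) of just how much a single pooled mean can hide:

```r
# Two invented people with very different affect levels
toy <- data.frame(id = rep(c("A", "B"), each = 3),
                  PA = c(1.8, 2.0, 2.2,  -1.2, -1.0, -0.8))
mean(toy$PA)                  # complete pooling: one number, 0.5
tapply(toy$PA, toy$id, mean)  # per-person means: 2.0 and -1.0
```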

Another option would be to treat each person as a group and model scores as a function of group. This is what would be called NO POOLING.

\[\begin{align*} \text{PA.std}_i &\sim \text{Normal}(\mu_i,\sigma) \\ \mu_i &= \alpha_{\text{id}[i]} \\ \alpha_j &\sim \text{Normal}(0, 1.5) \text{ for each of the 193 participants } j \\ \sigma &\sim \text{Exponential}(1) \end{align*}\]

d$id.f = as.factor(d$id)

m1 <- brm(
  data=d,
  family=gaussian,
  bf(PA.std ~ 0 + a,
     a ~ 0 + id.f,
     nl = TRUE),
  prior = c( prior(normal(0, 1.5), class=b, nlpar=a),
             prior(exponential(1), class=sigma)),
  iter=2000, warmup=1000, chains=4, cores=4, seed=9,
  file=here("files/models/m71.1")
)

You can download my model file here.

m1
 Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: PA.std ~ 0 + a 
         a ~ 0 + id.f
   Data: d (Number of observations: 13033) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Regression Coefficients:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
a_id.f1       0.08      0.06    -0.04     0.21 1.00    11270     2736
a_id.f2       1.94      0.06     1.82     2.06 1.00    10974     2899
a_id.f3      -0.03      0.06    -0.15     0.09 1.00     8695     3022
a_id.f4      -0.31      0.06    -0.43    -0.20 1.00     8990     2238
a_id.f5       1.12      0.14     0.84     1.40 1.00    12665     2989
a_id.f6      -0.53      0.07    -0.66    -0.39 1.00     7837     2500
a_id.f7       0.19      0.07     0.05     0.32 1.00    10511     2437
a_id.f8       2.13      0.06     2.00     2.25 1.00    10646     2581
a_id.f9       1.04      0.08     0.89     1.20 1.00     9015     2893
a_id.f10      0.51      0.06     0.39     0.62 1.00     9612     2709
a_id.f11      0.09      0.06    -0.03     0.21 1.00     9234     2953
a_id.f12      0.97      0.06     0.85     1.09 1.00     8591     2734
a_id.f13      0.35      0.06     0.23     0.47 1.00    10446     3255
a_id.f14     -0.12      0.06    -0.24     0.01 1.00    10432     2906
a_id.f15     -0.48      0.06    -0.60    -0.36 1.00    10630     2845
a_id.f16      0.79      0.07     0.67     0.92 1.00     9938     2910
a_id.f17     -0.07      0.07    -0.19     0.06 1.00     9661     2624
a_id.f20     -0.60      0.06    -0.73    -0.48 1.00     9270     2729
a_id.f23     -0.58      0.06    -0.70    -0.46 1.00    10089     3045
a_id.f24      1.97      0.06     1.86     2.09 1.00     7588     2825
a_id.f25      0.29      0.06     0.17     0.41 1.00     8913     2177
a_id.f26     -0.43      0.06    -0.55    -0.31 1.00     9631     2987
a_id.f27      2.02      0.06     1.89     2.14 1.00     8945     2670
a_id.f28     -1.83      0.07    -1.96    -1.71 1.00     9091     2541
a_id.f29     -0.09      0.06    -0.22     0.03 1.00    12304     2658
a_id.f30      0.92      0.06     0.80     1.04 1.00    11099     2580
a_id.f31     -0.48      0.06    -0.60    -0.36 1.00    10211     2559
a_id.f32      1.34      0.06     1.21     1.46 1.00     9726     2749
a_id.f33     -0.78      0.06    -0.90    -0.66 1.00    10981     2697
a_id.f34      0.30      0.06     0.19     0.42 1.00     8492     2520
a_id.f35     -0.15      0.06    -0.27    -0.02 1.00    10766     2704
a_id.f36      0.09      0.06    -0.03     0.21 1.00    13422     2339
a_id.f37      0.73      0.06     0.61     0.85 1.00    11831     2532
a_id.f38     -1.52      0.06    -1.64    -1.41 1.00     8925     2423
a_id.f39     -0.15      0.06    -0.27    -0.04 1.00     8167     2733
a_id.f40      0.25      0.06     0.12     0.37 1.00    10679     2932
a_id.f41      0.78      0.06     0.65     0.90 1.00     8825     2950
a_id.f42     -0.45      0.06    -0.58    -0.33 1.00     9401     2473
a_id.f43      0.43      0.08     0.28     0.58 1.00     9432     2655
a_id.f44     -0.52      0.06    -0.64    -0.39 1.00    10960     2713
a_id.f46      0.18      0.07     0.05     0.31 1.00    10528     2871
a_id.f47     -0.52      0.06    -0.64    -0.41 1.00    10959     2772
a_id.f48     -0.22      0.07    -0.35    -0.10 1.00     7910     2716
a_id.f49     -1.36      0.06    -1.48    -1.24 1.00     8662     2579
a_id.f50     -0.79      0.06    -0.91    -0.68 1.00     9687     2657
a_id.f51      1.77      0.06     1.65     1.90 1.00     8890     2653
a_id.f52     -0.11      0.06    -0.23     0.01 1.00    10873     2560
a_id.f53      0.31      0.06     0.19     0.42 1.00     9033     2564
a_id.f54      0.23      0.06     0.11     0.35 1.00    10638     2764
a_id.f55      1.74      0.06     1.63     1.86 1.00    10136     2617
a_id.f56     -0.21      0.09    -0.37    -0.04 1.00     9053     2606
a_id.f58     -1.44      0.08    -1.60    -1.28 1.00     9842     2312
a_id.f60      0.34      0.06     0.22     0.45 1.00    10447     2865
a_id.f64      0.88      0.07     0.75     1.01 1.00    10141     2394
a_id.f65      0.78      0.07     0.64     0.91 1.00    11076     2488
a_id.f67     -0.08      0.06    -0.21     0.04 1.00    11294     2590
a_id.f68      0.04      0.06    -0.09     0.16 1.00     9932     2665
a_id.f69     -0.81      0.06    -0.93    -0.69 1.00     8488     2789
a_id.f70      0.21      0.06     0.08     0.33 1.00     8768     2299
a_id.f72     -1.21      0.06    -1.32    -1.10 1.00     9975     2808
a_id.f73     -0.12      0.06    -0.24    -0.00 1.00    10602     2661
a_id.f75      0.01      0.06    -0.11     0.12 1.00     9736     2519
a_id.f76     -0.78      0.12    -1.02    -0.53 1.00     9026     2475
a_id.f80     -0.94      0.10    -1.14    -0.75 1.00    10759     2564
a_id.f81      0.29      0.06     0.17     0.41 1.00    10492     2699
a_id.f82      0.48      0.06     0.36     0.59 1.01     8959     2513
a_id.f83     -1.63      0.06    -1.74    -1.51 1.00    10706     2766
a_id.f85     -0.48      0.06    -0.61    -0.36 1.00     8873     2712
a_id.f86     -0.24      0.08    -0.41    -0.08 1.00     9621     2851
a_id.f88      1.10      0.07     0.95     1.24 1.00    11458     2660
a_id.f91     -0.23      0.06    -0.35    -0.12 1.00    12793     2750
a_id.f92     -0.39      0.06    -0.51    -0.27 1.00     7678     2507
a_id.f94      0.02      0.06    -0.10     0.13 1.00     7897     2806
a_id.f95     -0.01      0.07    -0.15     0.13 1.00    11740     2820
a_id.f96      0.34      0.06     0.22     0.46 1.00     8942     2517
a_id.f97     -0.50      0.06    -0.62    -0.38 1.00    11475     2683
a_id.f98     -0.03      0.06    -0.15     0.09 1.00    12370     2673
a_id.f99     -0.46      0.07    -0.60    -0.33 1.00     9260     2712
a_id.f100    -1.44      0.06    -1.56    -1.31 1.00    12439     2672
a_id.f101    -0.67      0.06    -0.79    -0.56 1.00    10542     2914
a_id.f102    -1.31      0.06    -1.43    -1.19 1.00    10056     2485
a_id.f104     1.34      0.06     1.22     1.46 1.00    11217     2417
a_id.f105    -1.10      0.06    -1.22    -0.97 1.00    13252     2519
a_id.f106     0.49      0.06     0.38     0.61 1.00     7821     2608
a_id.f107     0.57      0.06     0.45     0.69 1.00     9225     2433
a_id.f109     1.66      0.06     1.55     1.78 1.00     9725     2928
a_id.f110    -0.05      0.06    -0.17     0.07 1.00     8123     2753
a_id.f111     0.55      0.06     0.44     0.67 1.00     7923     2988
a_id.f112    -0.58      0.15    -0.87    -0.29 1.00     9583     2744
a_id.f113     0.24      0.06     0.12     0.35 1.00    10205     2840
a_id.f114    -0.70      0.07    -0.84    -0.56 1.00     9191     2473
a_id.f115     0.27      0.08     0.10     0.43 1.01    10034     2335
a_id.f116     0.04      0.06    -0.08     0.16 1.00     8985     2855
a_id.f117     0.45      0.12     0.20     0.69 1.00    10768     2864
a_id.f119     0.07      0.12    -0.16     0.29 1.00     8497     2771
a_id.f120    -0.21      0.08    -0.36    -0.05 1.00     9532     2146
a_id.f121     1.22      0.06     1.10     1.34 1.00    12291     2996
a_id.f123    -0.18      0.06    -0.30    -0.06 1.00    10283     2723
a_id.f125    -0.26      0.07    -0.39    -0.13 1.00     9338     2875
a_id.f127    -1.00      0.06    -1.12    -0.88 1.00    11082     2563
a_id.f128    -1.11      0.06    -1.23    -0.98 1.00    10116     2529
a_id.f129     0.57      0.10     0.39     0.75 1.00     8745     3141
a_id.f132     0.10      0.06    -0.02     0.22 1.00    10010     2707
a_id.f133     0.02      0.06    -0.11     0.14 1.00     8711     2583
a_id.f134     0.21      0.07     0.08     0.34 1.00    10237     2613
a_id.f135     1.03      0.21     0.62     1.44 1.00    11717     2849
a_id.f136    -0.18      0.07    -0.31    -0.05 1.00    11568     2788
a_id.f137     1.66      0.08     1.51     1.83 1.01    10422     2345
a_id.f138    -1.68      0.07    -1.81    -1.55 1.00    10819     2759
a_id.f139     0.14      0.07    -0.00     0.27 1.00     9378     2591
a_id.f140     0.06      0.07    -0.07     0.19 1.00    11997     2515
a_id.f142     1.34      0.07     1.20     1.48 1.00     9723     2380
a_id.f143    -1.15      0.07    -1.28    -1.02 1.00     9321     2730
a_id.f144    -0.26      0.12    -0.50    -0.02 1.00     9379     2805
a_id.f145    -1.34      0.08    -1.50    -1.17 1.00    10669     2879
a_id.f147     0.01      0.09    -0.17     0.20 1.00    10415     2557
a_id.f148    -0.05      0.07    -0.19     0.08 1.00     9780     2429
a_id.f149     0.90      0.07     0.76     1.04 1.00    12587     2651
a_id.f150     1.55      0.07     1.42     1.69 1.00    13210     2557
a_id.f151     0.69      0.07     0.55     0.83 1.00    13447     2697
a_id.f152    -0.83      0.07    -0.97    -0.68 1.00     9650     3028
a_id.f153    -1.35      0.07    -1.50    -1.21 1.00     9672     2439
a_id.f154    -0.77      0.08    -0.92    -0.63 1.00    10363     2583
a_id.f155    -0.39      0.08    -0.54    -0.24 1.00     9490     2736
a_id.f157     0.76      0.07     0.62     0.91 1.00    12031     2628
a_id.f158     0.76      0.08     0.61     0.92 1.00    11619     2615
a_id.f160    -0.17      0.08    -0.33    -0.00 1.00     8099     2270
a_id.f162     0.36      0.08     0.20     0.52 1.00    10579     2518
a_id.f163    -0.24      0.07    -0.38    -0.09 1.00     7979     2688
a_id.f164    -1.39      0.07    -1.53    -1.25 1.00    10235     2906
a_id.f165     1.12      0.07     0.98     1.26 1.00    11826     2540
a_id.f166     0.04      0.07    -0.10     0.19 1.00     9628     2716
a_id.f167    -0.46      0.07    -0.61    -0.32 1.00    11116     2846
a_id.f168    -0.26      0.07    -0.40    -0.12 1.00     8970     2591
a_id.f169     0.36      0.08     0.21     0.52 1.01     9863     2360
a_id.f170    -1.38      0.08    -1.53    -1.23 1.00    10634     2948
a_id.f171    -0.52      0.07    -0.67    -0.37 1.00    10290     2607
a_id.f173    -0.01      0.09    -0.18     0.16 1.00     8169     2938
a_id.f174    -0.12      0.09    -0.29     0.04 1.00    12217     2605
a_id.f176     0.43      0.10     0.22     0.63 1.00     8486     2533
a_id.f177    -0.04      0.10    -0.23     0.15 1.01     8429     2566
a_id.f178    -0.19      0.08    -0.34    -0.04 1.00    12379     2182
a_id.f181     0.68      0.08     0.52     0.84 1.00    10100     2429
a_id.f182     0.40      0.08     0.24     0.56 1.00    10363     2484
a_id.f183     0.61      0.08     0.46     0.76 1.00    11405     2813
a_id.f184    -1.61      0.08    -1.77    -1.45 1.00    10245     2776
a_id.f185     0.25      0.08     0.10     0.40 1.00     9114     2973
a_id.f186    -0.62      0.09    -0.80    -0.45 1.00     8718     2540
a_id.f188    -0.20      0.08    -0.36    -0.04 1.00    10469     2604
a_id.f189     0.26      0.10     0.07     0.45 1.00    11539     2702
a_id.f190    -1.72      0.08    -1.89    -1.56 1.00     8338     2445
a_id.f191    -0.06      0.09    -0.22     0.11 1.00    11183     2658
a_id.f194     0.02      0.09    -0.16     0.20 1.00     9761     2193
a_id.f195     1.43      0.08     1.26     1.60 1.00     8897     3002
a_id.f196     0.02      0.10    -0.17     0.20 1.00     7595     2485
a_id.f197     0.64      0.09     0.47     0.81 1.00    11572     2250
a_id.f198    -1.65      0.09    -1.82    -1.46 1.00     8803     3035
a_id.f199     0.46      0.09     0.28     0.63 1.00     9935     2806
a_id.f201     0.68      0.08     0.53     0.82 1.00     8205     2792
a_id.f202    -0.47      0.09    -0.64    -0.29 1.00    10762     3021
a_id.f203    -1.10      0.10    -1.30    -0.90 1.00    10729     2514
a_id.f204    -1.20      0.10    -1.40    -0.99 1.00     9415     2508
a_id.f205    -0.00      0.10    -0.19     0.19 1.00     9123     2452
a_id.f206    -0.18      0.09    -0.35    -0.01 1.00     9946     2997
a_id.f207    -0.41      0.10    -0.60    -0.22 1.00    12955     2891
a_id.f208     0.35      0.10     0.15     0.55 1.00    12311     2147
a_id.f209    -0.47      0.10    -0.67    -0.28 1.00     9681     2707
a_id.f211    -1.27      0.08    -1.43    -1.11 1.00     9841     2794
a_id.f212     0.12      0.10    -0.06     0.32 1.00    10195     2916
a_id.f213     0.17      0.12    -0.07     0.41 1.00    11124     2958
a_id.f214     0.42      0.11     0.21     0.64 1.00     8942     2451
a_id.f216    -0.06      0.13    -0.32     0.19 1.00    11152     2525
a_id.f217     0.03      0.10    -0.17     0.23 1.00    10467     2976
a_id.f219     0.07      0.10    -0.14     0.27 1.00     9617     2229
a_id.f220     1.01      0.11     0.79     1.22 1.00     9917     2500
a_id.f221    -0.26      0.11    -0.48    -0.04 1.00    11306     2733
a_id.f222     0.16      0.10    -0.04     0.37 1.00     8358     2780
a_id.f223    -0.12      0.12    -0.36     0.12 1.00     9828     2595
a_id.f224    -0.24      0.12    -0.48    -0.01 1.00     9876     2741
a_id.f225    -0.09      0.12    -0.32     0.14 1.00     9183     2742
a_id.f226    -0.74      0.14    -1.02    -0.46 1.00    11079     2789
a_id.f227    -0.06      0.15    -0.37     0.25 1.00     8192     2470
a_id.f228     0.44      0.14     0.17     0.72 1.00     9356     2342
a_id.f229     0.52      0.14     0.24     0.80 1.00    10801     2699
a_id.f230    -0.42      0.14    -0.71    -0.14 1.00    10968     2590
a_id.f231     0.59      0.16     0.28     0.90 1.00    10388     2354
a_id.f232     0.60      0.12     0.37     0.83 1.00    10335     2461
a_id.f233     0.39      0.12     0.14     0.63 1.01     9310     2710
a_id.f234     0.90      0.17     0.55     1.23 1.00     9510     2748
a_id.f236    -1.32      0.14    -1.58    -1.06 1.00     9147     2398
a_id.f237    -0.63      0.15    -0.91    -0.34 1.00     9507     2838
a_id.f238     1.04      0.15     0.76     1.32 1.00    10593     2814
a_id.f239     0.55      0.18     0.20     0.90 1.01    10202     2706

Further Distributional Parameters:
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     0.60      0.00     0.59     0.61 1.00     7117     2734

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

This is inefficient, in that the model treats each person as entirely separate. Let’s try a PARTIAL POOLING model.

\[\begin{align*} \text{PA.std}_i &\sim \text{Normal}(\mu_i, \sigma) \\ \mu_i &= \alpha_{\text{id}[i]} \\ \alpha_j &\sim \text{Normal}(\bar{\alpha}, \sigma_{\alpha}) \text{ for each of the 193 participants } j\\ \bar{\alpha} &\sim \text{Normal}(0, 1.5)\\ \sigma_{\alpha} &\sim \text{Exponential}(1) \\ \sigma &\sim \text{Exponential}(1) \\ \end{align*}\]

m2 <- brm(
  data=d,
  family=gaussian,
  PA.std ~ 1 + (1 | id), 
  prior = c( prior(normal(0, 1.5), class=Intercept),
             prior(exponential(1), class=sd),
             prior(exponential(1), class=sigma)),
  iter=4000, warmup=1000, chains=4, cores=4, seed=9, #running this longer so it mixes
  control = list(adapt_delta =.99),
  file=here("files/models/m71.2")
)

You can download my model file here.

Despite the additional complexity, we’ll see that our partial pooling model fits at least as well as our no pooling model while using slightly fewer effective parameters (i.e., it is less complex).

m1 <- add_criterion(m1, criterion = "loo")
m2 <- add_criterion(m2, criterion = "loo")

loo_compare(m1, m2) %>% print(simplify=F)
   elpd_diff se_diff  elpd_loo se_elpd_loo p_loo    se_p_loo looic    se_looic
m1      0.0       0.0 -11956.0    110.5       202.1      4.4  23911.9    221.1
m2      0.0       0.9 -11956.0    110.5       201.8      4.4  23911.9    221.1

Let’s visualize the differences in these.

nd1 = distinct(d, id.f)
post1 = epred_draws(m1, nd1)
nd2 = distinct(d, id)
post2 = epred_draws(m2, nd2)
p1 = post1 %>% 
  ggplot( aes(y=.epred, x=id.f) ) +
  stat_gradientinterval() +
  scale_x_discrete(labels=NULL, breaks=NULL) +
  labs(x="id", y="PA.std", title = "no pooling")

p2 = post2 %>% 
  mutate(id=as.factor(id)) %>% 
  ggplot( aes(y=.epred, x=id) ) +
  stat_gradientinterval() +
  scale_x_discrete(labels=NULL, breaks=NULL) +
  labs(x="id", y="PA.std", title = "partial pooling")

p1 / p2
means1 = post1 %>% 
  mean_qi(.epred)
means2 = post2 %>% 
  mean_qi(.epred) %>% 
  mutate(id=as.factor(id))

means1 %>% 
  ggplot( aes(x=id.f, y=.epred)) +
  geom_hline( aes(yintercept=mean(.epred)),
              linetype="dashed") +
  geom_point( aes(color="no pooling") ) +
  geom_point( aes(x=id, color="partial pooling"),
              data=means2,
              size=2,
              alpha=.4) +
  scale_color_manual( values=c("#e07a5f", "#1c5253") ) +
  scale_x_discrete(breaks=NULL) +
  labs(x="id", y="PA.std")+
  theme(legend.position = "top")

The amount of pooling is affected by the number of data points – lots of data for each cluster = less pooling.
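A standard normal-normal result shows why (using the symbols from the partial pooling model above, where \(n_j\) is the number of observations for person \(j\) and \(\bar{y}_j\) is that person’s raw mean): the partially pooled estimate is approximately a precision-weighted average,

\[ \hat{\alpha}_j \approx \frac{\frac{n_j}{\sigma^2}\bar{y}_j + \frac{1}{\sigma^2_{\alpha}}\bar{\alpha}}{\frac{n_j}{\sigma^2} + \frac{1}{\sigma^2_{\alpha}}} \]

As \(n_j\) grows, the weight on \(\bar{y}_j\) dominates and the estimate shrinks less toward \(\bar{\alpha}\).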

d_small = d %>% 
  with_groups(id, 
              sample_n,
              size = 3)
m1s <- brm(
  data=d_small,
  family=gaussian,
  bf(PA.std ~ 0 + a,
     a ~ 0 + id.f,
     nl = TRUE),
  prior = c( prior(normal(0, 1.5), class=b, nlpar=a),
             prior(exponential(1), class=sigma)),
  iter=2000, warmup=1000, chains=4, cores=4, seed=9,
  file=here("files/models/m71.1s")
)

m2s <- brm(
  data=d_small,
  family=gaussian,
  PA.std ~ 1 + (1 | id), 
  prior = c( prior(normal(0, 1.5), class=Intercept),
             prior(exponential(1), class=sd),
             prior(exponential(1), class=sigma)),
  iter=4000, warmup=1000, chains=4, cores=4, seed=9, #running this longer so it mixes
  file=here("files/models/m71.2s")
)

nd1 = distinct(d_small, id.f)
post1 = epred_draws(m1s, nd1)
nd2 = distinct(d_small, id)
post2 = epred_draws(m2s, nd2)

means1 = post1 %>% 
  mean_qi(.epred)
means2 = post2 %>% 
  mean_qi(.epred) %>% 
  mutate(id=as.factor(id))

means1 %>% 
  ggplot( aes(x=id.f, y=.epred)) +
  geom_hline( aes(yintercept=mean(.epred)),
              linetype="dashed") +
  geom_point( aes(color="no pooling") ) +
  geom_point( aes(x=id, color="partial pooling"),
              data=means2,
              size=2,
              alpha=.4) +
  scale_color_manual( values=c("#e07a5f", "#1c5253") ) +
  scale_x_discrete(breaks=NULL) +
  labs(x="id", y="PA.std", title="3 observations per person")+
  theme(legend.position = "bottom")

You can download my model files here and here.

m2
 Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: PA.std ~ 1 + (1 | id) 
   Data: d (Number of observations: 13033) 
  Draws: 4 chains, each with iter = 4000; warmup = 1000; thin = 1;
         total post-warmup draws = 12000

Multilevel Hyperparameters:
~id (Number of levels: 193) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.80      0.04     0.73     0.89 1.02      431      563

Regression Coefficients:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept     0.00      0.06    -0.11     0.12 1.05      123      352

Further Distributional Parameters:
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     0.60      0.00     0.59     0.61 1.00    15924     8655

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

Wait, where are all my estimates?

Let’s look closer at these models. How many parameters does each model have?

get_variables(m1) %>% length()
[1] 202
get_variables(m2) %>% length()
[1] 205

What additional parameters?

m1 has a unique intercept for each participant and a standard deviation of scores (1 \(\sigma\)).

m2 is estimating all of that plus a grand mean intercept and the variability of means (\(\sigma_M\)).

(what’s the extra one? brms stores the intercept twice: in addition to the reported b_Intercept, it keeps the internal “Intercept” parameter it actually samples with, computed with predictors temporarily centered. You can see it in the variable list below.)

There’s both an estimate for an overall mean (b_Intercept) as well as the deviations of each person’s mean from that overall (r_id[#, Intercept]).

get_variables(m2)
  [1] "b_Intercept"         "sd_id__Intercept"    "sigma"              
  [4] "Intercept"           "r_id[1,Intercept]"   "r_id[2,Intercept]"  
  [7] "r_id[3,Intercept]"   "r_id[4,Intercept]"   "r_id[5,Intercept]"  
 [10] "r_id[6,Intercept]"   "r_id[7,Intercept]"   "r_id[8,Intercept]"  
 [13] "r_id[9,Intercept]"   "r_id[10,Intercept]"  "r_id[11,Intercept]" 
 [16] "r_id[12,Intercept]"  "r_id[13,Intercept]"  "r_id[14,Intercept]" 
 [19] "r_id[15,Intercept]"  "r_id[16,Intercept]"  "r_id[17,Intercept]" 
 [22] "r_id[20,Intercept]"  "r_id[23,Intercept]"  "r_id[24,Intercept]" 
 [25] "r_id[25,Intercept]"  "r_id[26,Intercept]"  "r_id[27,Intercept]" 
 [28] "r_id[28,Intercept]"  "r_id[29,Intercept]"  "r_id[30,Intercept]" 
 [31] "r_id[31,Intercept]"  "r_id[32,Intercept]"  "r_id[33,Intercept]" 
 [34] "r_id[34,Intercept]"  "r_id[35,Intercept]"  "r_id[36,Intercept]" 
 [37] "r_id[37,Intercept]"  "r_id[38,Intercept]"  "r_id[39,Intercept]" 
 [40] "r_id[40,Intercept]"  "r_id[41,Intercept]"  "r_id[42,Intercept]" 
 [43] "r_id[43,Intercept]"  "r_id[44,Intercept]"  "r_id[46,Intercept]" 
 [46] "r_id[47,Intercept]"  "r_id[48,Intercept]"  "r_id[49,Intercept]" 
 [49] "r_id[50,Intercept]"  "r_id[51,Intercept]"  "r_id[52,Intercept]" 
 [52] "r_id[53,Intercept]"  "r_id[54,Intercept]"  "r_id[55,Intercept]" 
 [55] "r_id[56,Intercept]"  "r_id[58,Intercept]"  "r_id[60,Intercept]" 
 [58] "r_id[64,Intercept]"  "r_id[65,Intercept]"  "r_id[67,Intercept]" 
 [61] "r_id[68,Intercept]"  "r_id[69,Intercept]"  "r_id[70,Intercept]" 
 [64] "r_id[72,Intercept]"  "r_id[73,Intercept]"  "r_id[75,Intercept]" 
 [67] "r_id[76,Intercept]"  "r_id[80,Intercept]"  "r_id[81,Intercept]" 
 [70] "r_id[82,Intercept]"  "r_id[83,Intercept]"  "r_id[85,Intercept]" 
 [73] "r_id[86,Intercept]"  "r_id[88,Intercept]"  "r_id[91,Intercept]" 
 [76] "r_id[92,Intercept]"  "r_id[94,Intercept]"  "r_id[95,Intercept]" 
 [79] "r_id[96,Intercept]"  "r_id[97,Intercept]"  "r_id[98,Intercept]" 
 [82] "r_id[99,Intercept]"  "r_id[100,Intercept]" "r_id[101,Intercept]"
 [85] "r_id[102,Intercept]" "r_id[104,Intercept]" "r_id[105,Intercept]"
 [88] "r_id[106,Intercept]" "r_id[107,Intercept]" "r_id[109,Intercept]"
 [91] "r_id[110,Intercept]" "r_id[111,Intercept]" "r_id[112,Intercept]"
 [94] "r_id[113,Intercept]" "r_id[114,Intercept]" "r_id[115,Intercept]"
 [97] "r_id[116,Intercept]" "r_id[117,Intercept]" "r_id[119,Intercept]"
[100] "r_id[120,Intercept]" "r_id[121,Intercept]" "r_id[123,Intercept]"
[103] "r_id[125,Intercept]" "r_id[127,Intercept]" "r_id[128,Intercept]"
[106] "r_id[129,Intercept]" "r_id[132,Intercept]" "r_id[133,Intercept]"
[109] "r_id[134,Intercept]" "r_id[135,Intercept]" "r_id[136,Intercept]"
[112] "r_id[137,Intercept]" "r_id[138,Intercept]" "r_id[139,Intercept]"
[115] "r_id[140,Intercept]" "r_id[142,Intercept]" "r_id[143,Intercept]"
[118] "r_id[144,Intercept]" "r_id[145,Intercept]" "r_id[147,Intercept]"
[121] "r_id[148,Intercept]" "r_id[149,Intercept]" "r_id[150,Intercept]"
[124] "r_id[151,Intercept]" "r_id[152,Intercept]" "r_id[153,Intercept]"
[127] "r_id[154,Intercept]" "r_id[155,Intercept]" "r_id[157,Intercept]"
[130] "r_id[158,Intercept]" "r_id[160,Intercept]" "r_id[162,Intercept]"
[133] "r_id[163,Intercept]" "r_id[164,Intercept]" "r_id[165,Intercept]"
[136] "r_id[166,Intercept]" "r_id[167,Intercept]" "r_id[168,Intercept]"
[139] "r_id[169,Intercept]" "r_id[170,Intercept]" "r_id[171,Intercept]"
[142] "r_id[173,Intercept]" "r_id[174,Intercept]" "r_id[176,Intercept]"
[145] "r_id[177,Intercept]" "r_id[178,Intercept]" "r_id[181,Intercept]"
[148] "r_id[182,Intercept]" "r_id[183,Intercept]" "r_id[184,Intercept]"
[151] "r_id[185,Intercept]" "r_id[186,Intercept]" "r_id[188,Intercept]"
[154] "r_id[189,Intercept]" "r_id[190,Intercept]" "r_id[191,Intercept]"
[157] "r_id[194,Intercept]" "r_id[195,Intercept]" "r_id[196,Intercept]"
[160] "r_id[197,Intercept]" "r_id[198,Intercept]" "r_id[199,Intercept]"
[163] "r_id[201,Intercept]" "r_id[202,Intercept]" "r_id[203,Intercept]"
[166] "r_id[204,Intercept]" "r_id[205,Intercept]" "r_id[206,Intercept]"
[169] "r_id[207,Intercept]" "r_id[208,Intercept]" "r_id[209,Intercept]"
[172] "r_id[211,Intercept]" "r_id[212,Intercept]" "r_id[213,Intercept]"
[175] "r_id[214,Intercept]" "r_id[216,Intercept]" "r_id[217,Intercept]"
[178] "r_id[219,Intercept]" "r_id[220,Intercept]" "r_id[221,Intercept]"
[181] "r_id[222,Intercept]" "r_id[223,Intercept]" "r_id[224,Intercept]"
[184] "r_id[225,Intercept]" "r_id[226,Intercept]" "r_id[227,Intercept]"
[187] "r_id[228,Intercept]" "r_id[229,Intercept]" "r_id[230,Intercept]"
[190] "r_id[231,Intercept]" "r_id[232,Intercept]" "r_id[233,Intercept]"
[193] "r_id[234,Intercept]" "r_id[236,Intercept]" "r_id[237,Intercept]"
[196] "r_id[238,Intercept]" "r_id[239,Intercept]" "lprior"             
[199] "lp__"                "accept_stat__"       "stepsize__"         
[202] "treedepth__"         "n_leapfrog__"        "divergent__"        
[205] "energy__"           

You can combine these to get each participant’s model-implied mean.

m2 %>% spread_draws(b_Intercept, r_id[id, Intercept]) %>% 
  mutate(PA = b_Intercept + r_id) 
# A tibble: 2,316,000 × 8
# Groups:   id, Intercept [193]
   .chain .iteration .draw b_Intercept    id Intercept    r_id     PA
    <int>      <int> <int>       <dbl> <int> <chr>       <dbl>  <dbl>
 1      1          1     1      0.0269     1 Intercept  0.0819  0.109
 2      1          1     1      0.0269     2 Intercept  1.88    1.91 
 3      1          1     1      0.0269     3 Intercept -0.188  -0.161
 4      1          1     1      0.0269     4 Intercept -0.380  -0.353
 5      1          1     1      0.0269     5 Intercept  1.17    1.20 
 6      1          1     1      0.0269     6 Intercept -0.433  -0.406
 7      1          1     1      0.0269     7 Intercept  0.102   0.129
 8      1          1     1      0.0269     8 Intercept  2.10    2.12 
 9      1          1     1      0.0269     9 Intercept  0.889   0.916
10      1          1     1      0.0269    10 Intercept  0.339   0.366
# ℹ 2,315,990 more rows

Or (when in doubt) just use the expected predictions function (see how they’re the same!):

nd = distinct(d, id)
add_epred_draws(nd, m2)
# A tibble: 2,316,000 × 6
# Groups:   id, .row [193]
      id  .row .chain .iteration .draw  .epred
   <int> <int>  <int>      <int> <int>   <dbl>
 1     1     1     NA         NA     1  0.109 
 2     1     1     NA         NA     2  0.208 
 3     1     1     NA         NA     3 -0.0119
 4     1     1     NA         NA     4  0.203 
 5     1     1     NA         NA     5 -0.0400
 6     1     1     NA         NA     6  0.132 
 7     1     1     NA         NA     7  0.0361
 8     1     1     NA         NA     8  0.116 
 9     1     1     NA         NA     9  0.0561
10     1     1     NA         NA    10  0.0557
# ℹ 2,315,990 more rows

Let’s also see how this model has decomposed the variance into within- and between-person components.

# original sd and variance
c("sd" = sd(d$PA.std), "var" = sd(d$PA.std)^2) %>% round(3)
   sd   var 
1.024 1.049 
# model estimates of sd between and within
m2 %>% spread_draws(sd_id__Intercept, sigma) 
# A tibble: 12,000 × 5
   .chain .iteration .draw sd_id__Intercept sigma
    <int>      <int> <int>            <dbl> <dbl>
 1      1          1     1            0.886 0.600
 2      1          2     2            0.872 0.604
 3      1          3     3            0.871 0.600
 4      1          4     4            0.887 0.602
 5      1          5     5            0.892 0.601
 6      1          6     6            0.886 0.606
 7      1          7     7            0.846 0.599
 8      1          8     8            0.847 0.601
 9      1          9     9            0.849 0.605
10      1         10    10            0.844 0.607
# ℹ 11,990 more rows
m2 %>% spread_draws(sd_id__Intercept, sigma) %>% 
  rename(
    between = sd_id__Intercept,
    within = sigma
  ) %>% 
  ggplot( aes(x=between, y=within) ) +
  geom_point(alpha=.7)
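These two standard deviations can be combined into the intraclass correlation (ICC): the proportion of the total variance that is attributable to between-person differences. This is a standard summary (not computed in the original code); plugging in the approximate posterior means from the output above (between ≈ 0.87, within ≈ 0.60):

\[ \text{ICC} = \frac{\sigma^2_{\alpha}}{\sigma^2_{\alpha} + \sigma^2} \approx \frac{0.87^2}{0.87^2 + 0.60^2} \approx 0.68 \]

In other words, roughly two-thirds of the variance in PA.std lies between people. You can also compute this ratio draw-by-draw from the spread_draws output above to get a full posterior distribution for the ICC.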

\[\begin{align*} \text{PA.std}_{ij} &\sim \text{Normal}(\mu_{ij}, \sigma) \\ \mu_{ij} &= \alpha_{j} \\ \alpha_{j} &\sim \text{Normal}(\bar{\alpha}, \sigma_{\alpha}) \text{ for } j \text{ in } 1...239\\ \bar{\alpha} &\sim \text{Normal}(.50,.25) \\ \sigma_{\alpha} &\sim \text{Exponential}(1) \\ \sigma &\sim \text{Exponential}(1) \\ \end{align*}\]

Another way to write out the expected value, \(\mu_{ij}\), is:

\[ \mu_{ij} = \bar{\alpha} + \alpha_{j} \]

Where \(\bar{\alpha}\) is the grand mean (the mean of the person means) and \(\alpha_{j}\) is the deviation of person \(j\) from that grand mean. There is one \(\bar{\alpha}\) for the entire sample, but a different \(\alpha_{j}\) for each person. Because there is an \(\alpha_{j}\) for every person, we can estimate the variability of these deviations across people, as well as the residual variability within each person.

\[\begin{align*} \text{PA.std}_{ij} &\sim \text{Normal}(\mu_{ij}, \sigma) \\ \mu_{ij} &= \bar{\alpha} + \alpha_{j} \\ \alpha_{j} &\sim \text{Normal}(0, \sigma_{\alpha}) \\ \bar{\alpha} &\sim \text{Normal}(.50,.25) \\ \sigma_{\alpha} &\sim \text{Exponential}(1) \\ \sigma &\sim \text{Exponential}(1) \\ \end{align*}\]

In this reparameterization, we no longer set a separate prior on each person-mean. The person-specific deviations are constrained to be centered at 0, so all we need to estimate is their variability \((\sigma_{\alpha})\).

In other words, this model is analogous to an ANOVA, in which we partition variability into within-group and between-group (here, between-person) components. Accordingly, we estimate a standard deviation at both levels.

Code
spread_draws(m2, b_Intercept, r_id[id, term]) %>% 
  mutate(p_mean = b_Intercept + r_id) %>% 
  filter(id %in% sample_id) %>% 
  ggplot(aes(x = id, y = p_mean)) + 
  geom_hline( aes(yintercept = mean(d$PA.std)),
              linetype="dashed") +
  stat_halfeye(alpha =.5) 

In this figure, we can visualize three approaches to pooling:

  • NO POOLING: each person’s mean is calculated from only their own data; no information is shared across people (black dots).
  • COMPLETE POOLING: any differences between people are treated as random noise. The grand mean is the only estimate and is assumed to apply equally to every person (dashed line).
  • PARTIAL POOLING: information is shared across participants, but people are still allowed to differ. Person-specific means are regularized toward the grand mean, and the more data we have on a person, the less their estimate is regularized.
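The amount of regularization under partial pooling can be sketched with the classic shrinkage approximation for normal models (a heuristic for intuition, not a quantity the model reports directly; \(n_j\) and \(\bar{y}_j\) denote person \(j\)’s number of observations and sample mean):

\[ \hat{\alpha}_j \approx \lambda_j \bar{y}_j + (1 - \lambda_j)\bar{\alpha}, \quad \text{where } \lambda_j = \frac{n_j}{n_j + \sigma^2/\sigma^2_{\alpha}} \]

As \(n_j\) grows, \(\lambda_j\) approaches 1 and the estimate relies mostly on the person’s own data; with few observations, \(\lambda_j\) shrinks and the estimate is pulled toward the grand mean \(\bar{\alpha}\).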
Code
actual_means = d %>% 
  with_groups(id, summarise, p_mean = mean(PA.std))

spread_draws(m2, b_Intercept, r_id[id, term]) %>% 
  mutate(p_mean = b_Intercept + r_id) %>% 
  mean_qi(p_mean) %>% 
  filter(id %in% sample_id) %>% 
  ggplot( aes(x=id, y=p_mean) ) +
  geom_hline( aes(yintercept = mean(d$PA.std)),
              linetype="dashed") +
  geom_point( size=2, color="#e07a5f") + 
  geom_point( data=filter(actual_means, 
                          id %in% sample_id)) +
  labs(x="id",y="PA.std")

writing our multilevel model

It’s common to write out multilevel models using formulas for the different levels. Level 1 is the level of your outcome, or the thing that repeats. Level 2 is the level of your groups.

\[\begin{align*} \text{Level 1} &\\ \text{PA.std}_{ij} &= \alpha_j + \epsilon_{ij} \\ \text{Level 2} &\\ \alpha_j &= \gamma_0 + U_j \end{align*}\]

Some refer to \(U_j\) as a “random” or “varying” effect because it varies across groups. \(\gamma_0\) is therefore a “fixed” or “non-varying” effect because it applies to all groups.

drawing from the posterior

We’ve already seen how we can use spread_draws to get our parameter estimates.

m2 %>% spread_draws(b_Intercept, sigma, sd_id__Intercept) %>% head
# A tibble: 6 × 6
  .chain .iteration .draw b_Intercept sigma sd_id__Intercept
   <int>      <int> <int>       <dbl> <dbl>            <dbl>
1      1          1     1      0.0269 0.600            0.886
2      1          2     2      0.0313 0.604            0.872
3      1          3     3      0.0264 0.600            0.871
4      1          4     4      0.0217 0.602            0.887
5      1          5     5      0.0318 0.601            0.892
6      1          6     6      0.0273 0.606            0.886
m2 %>% spread_draws(r_id[id, term]) %>% head
# A tibble: 6 × 6
# Groups:   id, term [1]
     id term         r_id .chain .iteration .draw
  <int> <chr>       <dbl>  <int>      <int> <int>
1     1 Intercept  0.0819      1          1     1
2     1 Intercept  0.176       1          2     2
3     1 Intercept -0.0383      1          3     3
4     1 Intercept  0.182       1          4     4
5     1 Intercept -0.0718      1          5     5
6     1 Intercept  0.104       1          6     6

We can get expected values (means) for individuals in our sample.

set.seed(9)
sample_id = sample(unique(d$id), replace=F, size=3)

nd = data.frame(id=sample_id, id.f = sample_id)

add_epred_draws(newdata=nd, object=m2) %>% 
  mean_qi()
# A tibble: 3 × 9
     id  id.f  .row .epred .lower .upper .width .point .interval
  <int> <int> <int>  <dbl>  <dbl>  <dbl>  <dbl> <chr>  <chr>    
1     6     6     3 -0.525 -0.664 -0.387   0.95 mean   qi       
2    60    60     2  0.334  0.213  0.454   0.95 mean   qi       
3   232   232     1  0.594  0.366  0.820   0.95 mean   qi       

We can get predicted values (scores) for individuals in our sample.

add_predicted_draws(newdata=nd, object=m2) %>% 
  mean_qi()
# A tibble: 3 × 9
     id  id.f  .row .prediction .lower .upper .width .point .interval
  <int> <int> <int>       <dbl>  <dbl>  <dbl>  <dbl> <chr>  <chr>    
1     6     6     3      -0.531 -1.72   0.632   0.95 mean   qi       
2    60    60     2       0.324 -0.886  1.52    0.95 mean   qi       
3   232   232     1       0.586 -0.600  1.77    0.95 mean   qi       

expected values vs predicted values

Code
expected = add_epred_draws(newdata=nd, object=m2)
predicted = add_predicted_draws(newdata=nd, object=m2)

full_join(expected, predicted) %>% 
  filter(.draw <= 100) %>% 
  ungroup() %>% 
  ggplot( aes(x=id, y=.epred)) +
  stat_halfeye( aes(fill="expected"),
                alpha=.3) +
  stat_halfeye( aes(y =.prediction, fill="predicted"), 
                alpha=.3 ) +
  scale_fill_manual( values=c("#e07a5f","#1c5253") ) +
  labs(x="id", y="PA.std", fill="draw") +
  scale_x_continuous(breaks=sample_id) +
  theme(legend.position = "top")

Finally, we can get expected values for new individuals.

Code
new_id = max(d$id) + seq_len(3)

nd = data.frame(id=new_id)

add_epred_draws(newdata=nd, object=m2, 
                allow_new_levels=T) %>% 
  ggplot( aes(x=id, y=.epred) ) +
  stat_halfeye()
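By default, brms generates draws for unseen levels by resampling from the uncertainty of the existing levels. You can instead ask it to draw new-person intercepts directly from the fitted \(\text{Normal}(0, \sigma_{\alpha})\) distribution via the `sample_new_levels` argument (a variation on the code above, not something used in the original workflow):

```
# Sketch: draw hypothetical new-person intercepts from Normal(0, sd_id)
# rather than the default "uncertainty" resampling scheme
add_epred_draws(newdata=nd, object=m2, 
                allow_new_levels=T,
                sample_new_levels="gaussian") %>% 
  mean_qi()
```

The two schemes usually give similar intervals; "gaussian" is convenient when you want the new levels to reflect the estimated population distribution itself.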

adding covariates

Let’s add time to our model. Because time varies (can change) within person from assessment to assessment, this is a Level 1 variable. Note that I have NOT added a varying component to Level 2 – in other words, I’m stating that the effect of time on positive affect is fixed or identical across participants.

\[\begin{align*} \text{Level 1} &\\ \text{PA.std}_{ij} &= \alpha_j + \beta_j(\text{day}_{ij}) + \epsilon_{ij} \\ \text{Level 2} &\\ \alpha_j &= \gamma_0 + U_{0j} \\ \beta_j &= \gamma_1 \\ \end{align*}\]

m3 <- brm(
  data=d,
  family=gaussian,
  PA.std ~ 1 + day + (1 | id), 
  prior = c( prior(normal(.50, .25), class=Intercept),
             prior(normal(0, 1), class=b), #new prior for new term
             prior(exponential(1), class=sd),
             prior(exponential(1), class=sigma)),
  iter=2000, warmup=1000, chains=4, cores=4, seed=9,
  file=here("files/models/m71.3")
)

You can download my model file here.
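If you want to confirm which priors the fitted model actually used (including any brms defaults filled in for parameters you didn’t set explicitly), you can inspect the prior summary. This is just a quick sanity check, not part of the modeling workflow itself:

```
# report the priors attached to the fitted brms model
prior_summary(m3)
```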

m3
 Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: PA.std ~ 1 + day + (1 | id) 
   Data: d (Number of observations: 13033) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Multilevel Hyperparameters:
~id (Number of levels: 193) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.80      0.04     0.73     0.88 1.04      110      285

Regression Coefficients:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept     0.13      0.05     0.02     0.23 1.08       47      188
day          -0.00      0.00    -0.00    -0.00 1.00     4550     3272

Further Distributional Parameters:
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     0.60      0.00     0.59     0.60 1.00     6598     2554

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

There are different intercepts for each participant, but not different slopes.
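Substituting Level 2 into Level 1 gives the combined form of this model:

\[ \text{PA.std}_{ij} = \gamma_0 + \gamma_1(\text{day}_{ij}) + U_{0j} + \epsilon_{ij} \]

The only person-specific term is \(U_{0j}\); the slope \(\gamma_1\) is shared by everyone.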

get_variables(m3)
  [1] "b_Intercept"         "b_day"               "sd_id__Intercept"   
  [4] "sigma"               "Intercept"           "r_id[1,Intercept]"  
  [7] "r_id[2,Intercept]"   "r_id[3,Intercept]"   "r_id[4,Intercept]"  
 [10] "r_id[5,Intercept]"   "r_id[6,Intercept]"   "r_id[7,Intercept]"  
 [13] "r_id[8,Intercept]"   "r_id[9,Intercept]"   "r_id[10,Intercept]" 
 [16] "r_id[11,Intercept]"  "r_id[12,Intercept]"  "r_id[13,Intercept]" 
 [19] "r_id[14,Intercept]"  "r_id[15,Intercept]"  "r_id[16,Intercept]" 
 [22] "r_id[17,Intercept]"  "r_id[20,Intercept]"  "r_id[23,Intercept]" 
 [25] "r_id[24,Intercept]"  "r_id[25,Intercept]"  "r_id[26,Intercept]" 
 [28] "r_id[27,Intercept]"  "r_id[28,Intercept]"  "r_id[29,Intercept]" 
 [31] "r_id[30,Intercept]"  "r_id[31,Intercept]"  "r_id[32,Intercept]" 
 [34] "r_id[33,Intercept]"  "r_id[34,Intercept]"  "r_id[35,Intercept]" 
 [37] "r_id[36,Intercept]"  "r_id[37,Intercept]"  "r_id[38,Intercept]" 
 [40] "r_id[39,Intercept]"  "r_id[40,Intercept]"  "r_id[41,Intercept]" 
 [43] "r_id[42,Intercept]"  "r_id[43,Intercept]"  "r_id[44,Intercept]" 
 [46] "r_id[46,Intercept]"  "r_id[47,Intercept]"  "r_id[48,Intercept]" 
 [49] "r_id[49,Intercept]"  "r_id[50,Intercept]"  "r_id[51,Intercept]" 
 [52] "r_id[52,Intercept]"  "r_id[53,Intercept]"  "r_id[54,Intercept]" 
 [55] "r_id[55,Intercept]"  "r_id[56,Intercept]"  "r_id[58,Intercept]" 
 [58] "r_id[60,Intercept]"  "r_id[64,Intercept]"  "r_id[65,Intercept]" 
 [61] "r_id[67,Intercept]"  "r_id[68,Intercept]"  "r_id[69,Intercept]" 
 [64] "r_id[70,Intercept]"  "r_id[72,Intercept]"  "r_id[73,Intercept]" 
 [67] "r_id[75,Intercept]"  "r_id[76,Intercept]"  "r_id[80,Intercept]" 
 [70] "r_id[81,Intercept]"  "r_id[82,Intercept]"  "r_id[83,Intercept]" 
 [73] "r_id[85,Intercept]"  "r_id[86,Intercept]"  "r_id[88,Intercept]" 
 [76] "r_id[91,Intercept]"  "r_id[92,Intercept]"  "r_id[94,Intercept]" 
 [79] "r_id[95,Intercept]"  "r_id[96,Intercept]"  "r_id[97,Intercept]" 
 [82] "r_id[98,Intercept]"  "r_id[99,Intercept]"  "r_id[100,Intercept]"
 [85] "r_id[101,Intercept]" "r_id[102,Intercept]" "r_id[104,Intercept]"
 [88] "r_id[105,Intercept]" "r_id[106,Intercept]" "r_id[107,Intercept]"
 [91] "r_id[109,Intercept]" "r_id[110,Intercept]" "r_id[111,Intercept]"
 [94] "r_id[112,Intercept]" "r_id[113,Intercept]" "r_id[114,Intercept]"
 [97] "r_id[115,Intercept]" "r_id[116,Intercept]" "r_id[117,Intercept]"
[100] "r_id[119,Intercept]" "r_id[120,Intercept]" "r_id[121,Intercept]"
[103] "r_id[123,Intercept]" "r_id[125,Intercept]" "r_id[127,Intercept]"
[106] "r_id[128,Intercept]" "r_id[129,Intercept]" "r_id[132,Intercept]"
[109] "r_id[133,Intercept]" "r_id[134,Intercept]" "r_id[135,Intercept]"
[112] "r_id[136,Intercept]" "r_id[137,Intercept]" "r_id[138,Intercept]"
[115] "r_id[139,Intercept]" "r_id[140,Intercept]" "r_id[142,Intercept]"
[118] "r_id[143,Intercept]" "r_id[144,Intercept]" "r_id[145,Intercept]"
[121] "r_id[147,Intercept]" "r_id[148,Intercept]" "r_id[149,Intercept]"
[124] "r_id[150,Intercept]" "r_id[151,Intercept]" "r_id[152,Intercept]"
[127] "r_id[153,Intercept]" "r_id[154,Intercept]" "r_id[155,Intercept]"
[130] "r_id[157,Intercept]" "r_id[158,Intercept]" "r_id[160,Intercept]"
[133] "r_id[162,Intercept]" "r_id[163,Intercept]" "r_id[164,Intercept]"
[136] "r_id[165,Intercept]" "r_id[166,Intercept]" "r_id[167,Intercept]"
[139] "r_id[168,Intercept]" "r_id[169,Intercept]" "r_id[170,Intercept]"
[142] "r_id[171,Intercept]" "r_id[173,Intercept]" "r_id[174,Intercept]"
[145] "r_id[176,Intercept]" "r_id[177,Intercept]" "r_id[178,Intercept]"
[148] "r_id[181,Intercept]" "r_id[182,Intercept]" "r_id[183,Intercept]"
[151] "r_id[184,Intercept]" "r_id[185,Intercept]" "r_id[186,Intercept]"
[154] "r_id[188,Intercept]" "r_id[189,Intercept]" "r_id[190,Intercept]"
[157] "r_id[191,Intercept]" "r_id[194,Intercept]" "r_id[195,Intercept]"
[160] "r_id[196,Intercept]" "r_id[197,Intercept]" "r_id[198,Intercept]"
[163] "r_id[199,Intercept]" "r_id[201,Intercept]" "r_id[202,Intercept]"
[166] "r_id[203,Intercept]" "r_id[204,Intercept]" "r_id[205,Intercept]"
[169] "r_id[206,Intercept]" "r_id[207,Intercept]" "r_id[208,Intercept]"
[172] "r_id[209,Intercept]" "r_id[211,Intercept]" "r_id[212,Intercept]"
[175] "r_id[213,Intercept]" "r_id[214,Intercept]" "r_id[216,Intercept]"
[178] "r_id[217,Intercept]" "r_id[219,Intercept]" "r_id[220,Intercept]"
[181] "r_id[221,Intercept]" "r_id[222,Intercept]" "r_id[223,Intercept]"
[184] "r_id[224,Intercept]" "r_id[225,Intercept]" "r_id[226,Intercept]"
[187] "r_id[227,Intercept]" "r_id[228,Intercept]" "r_id[229,Intercept]"
[190] "r_id[230,Intercept]" "r_id[231,Intercept]" "r_id[232,Intercept]"
[193] "r_id[233,Intercept]" "r_id[234,Intercept]" "r_id[236,Intercept]"
[196] "r_id[237,Intercept]" "r_id[238,Intercept]" "r_id[239,Intercept]"
[199] "lprior"              "lp__"                "accept_stat__"      
[202] "stepsize__"          "treedepth__"         "n_leapfrog__"       
[205] "divergent__"         "energy__"           
Code
set.seed(9)
sample_id = sample(unique(d$id), replace=F, size = 20)
distinct(d, id, day) %>% 
  filter(id %in% sample_id) %>% 
  add_epred_draws(m3) %>% 
  mean_qi() %>% 
  ggplot( aes(x=day, y=.epred, color=as.factor(id))) +
  geom_line() +
  guides(color="none") +
  labs(x="day",y="PA.std")